    A novel image enhancement method for palm vein images

    Palm vein images usually suffer from low contrast because the skin surface scatters the radiance of NIR light and because of image sensor limitations; the contrast of the image therefore needs to be enhanced prior to feature extraction. This paper presents a novel image enhancement method, referred to as Multiple Overlapping Tiles (MOT), which adaptively stretches the local contrast of palm vein images using multiple layers of overlapping image tiles. Experiments conducted on the CASIA palm vein image dataset demonstrate that the MOT method retains the finer subsurface details of vein images, which allows excellent feature detection and matching with SIFT and RootSIFT features. Comparison against existing palm vein recognition systems demonstrates that the proposed MOT method delivers lower Equal Error Rate (EER) values, outperforming other existing palm vein image enhancement methods.
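    Since only the abstract is available here, the following is a minimal sketch of what tile-wise adaptive contrast stretching over several overlapping tile grids could look like. The function names, tile size, and grid offsets are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def stretch(tile):
    """Linearly stretch a tile's intensities to the full [0, 255] range."""
    lo, hi = tile.min(), tile.max()
    if hi - lo < 1e-6:                      # flat tile: nothing to stretch
        return tile
    return (tile - lo) * 255.0 / (hi - lo)

def mot_enhance(img, tile=32, offsets=(0, 8, 16, 24)):
    """Average several locally contrast-stretched layers whose tile grids
    are shifted against each other, so each pixel is enhanced within
    multiple overlapping tiles rather than a single fixed block."""
    img = img.astype(np.float64)
    h, w = img.shape
    acc = np.zeros_like(img)
    for off in offsets:
        layer = np.empty_like(img)
        for y in range(off - tile, h, tile):        # grid shifted by `off`
            for x in range(off - tile, w, tile):
                y0, x0 = max(y, 0), max(x, 0)       # clip tile to the image
                y1, x1 = min(y + tile, h), min(x + tile, w)
                if y1 > y0 and x1 > x0:
                    layer[y0:y1, x0:x1] = stretch(img[y0:y1, x0:x1])
        acc += layer
    return (acc / len(offsets)).astype(np.uint8)
```

    Averaging layers whose tile grids are mutually shifted smooths the blocking artifacts that a single non-overlapping grid would introduce, which is plausibly how the overlapping-tiles idea preserves fine vein detail.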

    A Filtering Method for SIFT based Palm Vein Recognition

    A key issue with palm vein images is that slight movements of the fingers and thumb, or changes in hand pose, can stretch the skin in different areas and alter the vein patterns. This can produce palm vein images with an infinite number of variations for a given subject. This paper presents a novel filtering method for SIFT-based feature matching, referred to as the Mean and Median Distance (MMD) Filter, which examines the differences between matched keypoint coordinates and computes the mean and the median in each direction in order to filter out incorrect matches. Experiments conducted on the 850 nm subset of the CASIA dataset show that the proposed MMD filter retains correct points while reducing the false positives produced by other filtering methods. Comparison against existing SIFT-based palm vein recognition systems demonstrates that the proposed MMD filter delivers excellent performance, recording lower EER values.
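    The abstract describes the filter's core statistic directly, so a short sketch of mean/median displacement filtering follows; the tolerance `tol` and the exact accept/reject rule are assumptions, not the paper's criterion.

```python
import numpy as np

def mmd_filter(src_pts, dst_pts, tol=10.0):
    """Keep a SIFT match only if its displacement in each direction lies
    within `tol` pixels of both the mean and the median displacement of
    all matches (`tol` is an assumed tolerance).

    src_pts, dst_pts: (N, 2) arrays of matched keypoint (x, y) coordinates.
    Returns a boolean mask over the N matches, e.g. src_pts[mask].
    """
    d = dst_pts - src_pts                   # per-match displacement (dx, dy)
    mean, median = d.mean(axis=0), np.median(d, axis=0)
    return (np.all(np.abs(d - mean) <= tol, axis=1) &
            np.all(np.abs(d - median) <= tol, axis=1))
```

    The intuition is that two images of the same palm share one dominant global displacement, so matches that stray far from the central tendency in either direction are likely false positives; using the median alongside the mean keeps the estimate robust to the outliers being filtered.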

    Synthesizing Expressive Facial and Speech Animation by Text-to-IPA Translation with Emotion Control

    Given the complexity of human facial anatomy, animating facial expressions and lip movements for speech is a very time-consuming and tedious task. In this paper, a new text-to-animation framework for facial animation synthesis is proposed. The core idea is to improve the expressiveness of lip-sync animation by incorporating facial expressions into 3D animated characters. The idea is realized as a plug-in for Autodesk Maya, one of the most popular animation platforms in the industry, so that professional animators can apply the method effectively in their existing work. The proposed system is evaluated with two sets of surveys, in which both novice and experienced users provide feedback and evaluations from different perspectives. The results of the surveys highlight the effectiveness of creating realistic facial animations with the use of emotional expressions. Video demos of the synthesized animations are available online at https://git.io/fx5U.
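    To make the text-to-IPA pipeline concrete, here is a hypothetical skeleton of how timed IPA phonemes could drive viseme keyframes with an emotion layer. None of these names (VISEME_MAP, Keyframe, synthesize) or the tiny phoneme subset come from the paper; they only illustrate the overall architecture the abstract describes.

```python
from dataclasses import dataclass

# Illustrative IPA-phoneme -> viseme lookup (tiny assumed subset).
VISEME_MAP = {"p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
              "f": "lip_teeth", "v": "lip_teeth",
              "ɑ": "jaw_open", "i": "wide", "u": "rounded"}

@dataclass
class Keyframe:
    time: float     # seconds from the start of the utterance
    viseme: str     # mouth shape for the phoneme
    emotion: str    # expression layered on top, e.g. "happy"
    weight: float   # blend weight for the emotion control

def synthesize(phonemes, emotion="neutral", weight=0.5, dt=0.08):
    """Turn an IPA phoneme sequence into viseme keyframes with an emotion
    layer, roughly mirroring the framework's text -> IPA -> animation flow."""
    return [Keyframe(i * dt, VISEME_MAP.get(p, "rest"), emotion, weight)
            for i, p in enumerate(phonemes)]

print(synthesize(["h", "ɑ", "i"], emotion="happy"))
```

    In a real plug-in, each keyframe would be applied to the character rig's blendshapes inside Maya; this sketch stops at the data the rig would consume.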